RDF knowledge graphs (KGs) are powerful data structures that represent factual statements created from heterogeneous data sources. KG creation is laborious and demands data management techniques to be executed efficiently. This paper tackles the problem of automatically generating KG creation processes; it proposes techniques for planning and transforming heterogeneous data into RDF triples following mapping assertions specified in the RDF Mapping Language (RML). Given a set of mapping assertions, the planner provides an optimized execution plan by partitioning and scheduling the execution of the assertions. First, the planner assesses an optimized number of partitions considering the number of data sources, the types of mapping assertions, and the associations between different assertions. After providing the list of partitions and the assertions that belong to each partition, the planner determines their execution order. A greedy algorithm is implemented to generate bushy-tree execution plans over the partitions. The bushy-tree plans are translated into operating system commands that guide the execution of the partitions of mapping assertions in the order indicated by the bushy tree. The proposed optimization approach is evaluated over state-of-the-art RML-compliant engines and existing benchmarks of data sources and RML triples maps. Our experimental results show that the performance of the studied engines can be considerably improved, particularly in complex settings with a large number of triples maps and large data sources. As a result, engines that time out in complex cases are able to produce at least a portion of the KG when the planner is applied.
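The abstract above does not detail the greedy procedure. As a rough illustration only, the sketch below shows one way a greedy planner could pair partitions of mapping assertions into a bushy-tree execution plan; the names (`Plan`, `merge_cost`, `greedy_bushy_plan`) and the cost model are hypothetical and not taken from the paper.

```python
# Hypothetical sketch: greedy construction of a bushy-tree execution plan
# over partitions of mapping assertions. The cost model is illustrative only.
from dataclasses import dataclass


@dataclass
class Plan:
    partitions: tuple   # partitions covered by this (sub)plan
    cost: float         # estimated execution cost


def merge_cost(a: Plan, b: Plan) -> float:
    # Placeholder estimate for executing two subplans and combining their output.
    return a.cost + b.cost + 0.1 * min(a.cost, b.cost)


def greedy_bushy_plan(partition_costs: dict) -> Plan:
    """Repeatedly merge the two cheapest subplans, which yields a bushy tree."""
    plans = [Plan((p,), c) for p, c in partition_costs.items()]
    while len(plans) > 1:
        plans.sort(key=lambda pl: pl.cost)
        a, b = plans.pop(0), plans.pop(0)   # two cheapest subplans
        plans.append(Plan(a.partitions + b.partitions, merge_cost(a, b)))
    return plans[0]


if __name__ == "__main__":
    plan = greedy_bushy_plan({"P1": 5.0, "P2": 1.0, "P3": 3.0})
    print(plan.partitions, round(plan.cost, 2))
```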
Despite encoding an enormous amount of rich and valuable data, existing data sources are mostly created independently, which poses a significant challenge to their integration. Mapping languages such as RML and R2RML enable the declarative specification of meta-data and of the process of integrating data into a knowledge graph. In addition to expressing correspondences between data sources and a unified schema, mapping rules can include knowledge extraction functions. Combining mapping rules and functions represents a powerful formalism for specifying pipelines that transparently integrate data into a knowledge graph. Surprisingly, these formalisms are not fully exploited, and many knowledge graphs are created by executing ad-hoc programs to pre-process and integrate data. In this paper, we present EABlock, an approach that integrates entity alignment (EA) as part of RML mapping rules. EABlock includes a block of functions that perform entity recognition over textual attributes and link the recognized entities to the corresponding resources in Wikidata, DBpedia, and domain-specific thesauri, e.g., UMLS. EABlock provides reliable and efficient techniques to evaluate the functions and transfer the mappings to facilitate their application in any RML-compliant engine. We have empirically evaluated EABlock's performance, and the results show that EABlock speeds up knowledge graph creation pipelines that require entity recognition and linking in state-of-the-art RML-compliant engines. EABlock is publicly available as a tool through a GitHub repository (https://github.com/SDM-TIB/EABlock) and a DOI (https://doi.org/10.5281/zenodo.5779777).
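For readers unfamiliar with the kind of linking step described above, the following is a minimal, hypothetical Python sketch of the sort of function an RML function block could wrap: it looks up a recognized entity label in Wikidata's public `wbsearchentities` API and returns the best-matching resource IRI. This is an illustration under those assumptions, not EABlock's implementation.

```python
# Hypothetical sketch of an entity-linking helper: query Wikidata's public
# search API for a textual label and return the top-matching entity IRI.
from typing import Optional

import requests

WIKIDATA_API = "https://www.wikidata.org/w/api.php"


def link_to_wikidata(label: str, language: str = "en") -> Optional[str]:
    """Return the IRI of the top Wikidata match for a textual label, if any."""
    params = {
        "action": "wbsearchentities",
        "search": label,
        "language": language,
        "format": "json",
        "limit": 1,
    }
    response = requests.get(WIKIDATA_API, params=params, timeout=10)
    response.raise_for_status()
    results = response.json().get("search", [])
    if not results:
        return None
    return "http://www.wikidata.org/entity/" + results[0]["id"]


if __name__ == "__main__":
    print(link_to_wikidata("aspirin"))
```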
State-of-the-art object detectors are fast and accurate, but they require a large amount of well annotated training data to obtain good performance. However, obtaining a large amount of training annotations specific to a particular task, i.e., fine-grained annotations, is costly in practice. In contrast, obtaining common-sense relationships from text, e.g., "a table-lamp is a lamp that sits on top of a table", is much easier. Additionally, common-sense relationships like "on-top-of" are easy to annotate in a task-agnostic fashion. In this paper, we propose a probabilistic model that uses such relational knowledge to transform an off-the-shelf detector of coarse object categories (e.g., "table", "lamp") into a detector of fine-grained categories (e.g., "table-lamp"). We demonstrate that our method, RelDetect, achieves performance competitive to finetuning based state-of-the-art object detector baselines when an extremely low amount of fine-grained annotations is available ($0.2\%$ of entire dataset). We also demonstrate that RelDetect is able to utilize the inherent transferability of relationship information to obtain a better performance ($+5$ mAP points) than the above baselines on an unseen dataset (zero-shot transfer). In summary, we demonstrate the power of using relationships for object detection on datasets where fine-grained object categories can be linked to coarse-grained categories via suitable relationships.
A normalizing flow (NF) is a mapping that transforms a chosen probability distribution to a normal distribution. Such flows are a common technique used for data generation and density estimation in machine learning and data science. The density estimate obtained with a NF requires a change of variables formula that involves the computation of the Jacobian determinant of the NF transformation. In order to tractably compute this determinant, continuous normalizing flows (CNF) estimate the mapping and its Jacobian determinant using a neural ODE. Optimal transport (OT) theory has been successfully used to assist in finding CNFs by formulating them as OT problems with a soft penalty for enforcing the standard normal distribution as a target measure. A drawback of OT-based CNFs is the addition of a hyperparameter, $\alpha$, that controls the strength of the soft penalty and requires significant tuning. We present JKO-Flow, an algorithm to solve OT-based CNF without the need of tuning $\alpha$. This is achieved by integrating the OT CNF framework into a Wasserstein gradient flow framework, also known as the JKO scheme. Instead of tuning $\alpha$, we repeatedly solve the optimization problem for a fixed $\alpha$ effectively performing a JKO update with a time-step $\alpha$. Hence we obtain a "divide and conquer" algorithm by repeatedly solving simpler problems instead of solving a potentially harder problem with large $\alpha$.
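For reference, one standard statement of the JKO scheme mentioned above is given below; the notation is assumed for illustration, with each update being a Wasserstein-2 proximal step of size $\alpha$ toward a functional $F$, for example the KL divergence to the standard normal target.

```latex
\rho_{k+1} \;=\; \operatorname*{arg\,min}_{\rho}\;
  \frac{1}{2\alpha}\, W_2^2\!\left(\rho,\rho_k\right) + F(\rho),
\qquad
F(\rho) = \mathrm{KL}\!\left(\rho \,\|\, \mathcal{N}(0, I)\right).
```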
Predictive monitoring is a subfield of process mining that aims to predict how a running case will unfold in the future. One of its main challenges is forecasting the sequence of activities that will occur from a given point in time, known as suffix prediction. Most approaches to the suffix prediction problem learn to predict the suffix by learning how to predict the next activity only, not learning from the whole suffix during the training phase. This paper proposes a novel architecture based on an encoder-decoder model with an attention mechanism that decouples the representation learning of the prefixes from the inference phase, predicting only the activities of the suffix. During the inference phase, this architecture is extended with a heuristic search algorithm that improves the selection of the activity for each index of the suffix. Our approach has been tested using 12 public event logs against 6 different state-of-the-art proposals, showing that it significantly outperforms these proposals.
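The paper's specific heuristic is not described in the abstract; as a generic illustration of search-based suffix decoding, the sketch below applies a plain beam search, assuming a hypothetical `next_probs(trace)` function that returns a probability for each candidate next activity.

```python
# Generic beam-search sketch for suffix decoding (illustrative only; this is
# not the paper's heuristic). `next_probs(trace)` is assumed to return a dict
# mapping each candidate activity to a strictly positive probability.
import math

END = "<eos>"


def beam_search_suffix(prefix, next_probs, beam_width=3, max_len=20):
    """Return the highest-scoring suffix of activities for a running case."""
    beams = [(0.0, list(prefix), [])]   # (log-probability, full trace, suffix)
    for _ in range(max_len):
        candidates = []
        for score, trace, suffix in beams:
            if suffix and suffix[-1] == END:      # this beam already finished
                candidates.append((score, trace, suffix))
                continue
            for activity, p in next_probs(trace).items():
                candidates.append((score + math.log(p),
                                   trace + [activity],
                                   suffix + [activity]))
        beams = sorted(candidates, key=lambda c: c[0], reverse=True)[:beam_width]
        if all(s and s[-1] == END for _, _, s in beams):
            break
    return beams[0][2]
```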
Tourette Syndrome (TS) is a behavior disorder that onsets in childhood and is characterized by the expression of involuntary movements and sounds commonly referred to as tics. Behavioral therapy is the first-line treatment for patients with TS, and it helps patients raise awareness about tic occurrence as well as develop tic inhibition strategies. However, the limited availability of therapists and the difficulties for in-home follow up work limits its effectiveness. An automatic tic detection system that is easy to deploy could alleviate the difficulties of home-therapy by providing feedback to the patients while exercising tic awareness. In this work, we propose a novel architecture (T-Net) for automatic tic detection and classification from untrimmed videos. T-Net combines temporal detection and segmentation and operates on features that are interpretable to a clinician. We compare T-Net to several state-of-the-art systems working on deep features extracted from the raw videos and T-Net achieves comparable performance in terms of average precision while relying on interpretable features needed in clinical practice.
We study the feature-based newsvendor problem, in which a decision maker has access to historical data consisting of demand observations and exogenous features. In this setting, we investigate feature selection, aiming to derive sparse, interpretable models with improved out-of-sample performance. Up to now, state-of-the-art methods utilize regularization, which penalizes the number of selected features or the norm of the solution vector. As an alternative, we introduce a novel bilevel programming formulation. The upper-level problem selects a subset of features that minimizes an estimate of the out-of-sample cost of the ordering decisions based on a held-out validation set. The lower-level problem learns the optimal coefficients of the decision function on the training set, using only the features selected by the upper level. We provide a mixed-integer linear program reformulation of the bilevel program, which can be solved to optimality with standard optimization solvers. Our computational experiments show that the method accurately recovers the ground truth on instances with a few hundred observations. In contrast, regularization-based techniques often fail at feature recovery or require thousands of observations to attain similar accuracy. Regarding out-of-sample generalization, we achieve improved or comparable cost performance.
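To make the upper/lower-level structure concrete, a generic bilevel feature-selection formulation of the kind described above is shown below. The notation is assumed for illustration and is not the paper's exact model: $z$ are binary selection variables, $\beta$ are coefficients of a linear decision function, $x_i$ and $d_i$ are features and demand, $\mathcal{T}$ and $\mathcal{V}$ are the training and validation sets, and the newsvendor cost of ordering $q$ against demand $d$ is $C(q,d) = b\,(d-q)^+ + h\,(q-d)^+$ with underage cost $b$ and overage cost $h$.

```latex
\begin{aligned}
\min_{z \in \{0,1\}^p}\quad
  & \sum_{i \in \mathcal{V}} C\!\left(\beta^*(z)^\top x_i,\, d_i\right)
  && \text{(estimated out-of-sample cost on the validation set)}\\
\text{s.t.}\quad
  & \beta^*(z) \in \operatorname*{arg\,min}_{\beta:\;\beta_j = 0 \text{ if } z_j = 0}
    \; \sum_{i \in \mathcal{T}} C\!\left(\beta^\top x_i,\, d_i\right)
  && \text{(training problem restricted to the selected features)}
\end{aligned}
```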
Biclustering algorithms partition data and covariates simultaneously, providing new insights in several domains, such as the analysis of gene expression to discover new biological functions. This paper develops a new model-free biclustering algorithm in abstract spaces using the notions of energy distance (ED) and maximum mean discrepancy (MMD), two distances between probability distributions that can handle complex data such as curves or graphs. The proposed method can learn more general and complex cluster shapes than most existing approaches in the literature, which usually focus on detecting mean and variance differences. Although the biclustering configurations of our approach are constrained to create disjoint structures at the data and covariate levels, the results are competitive. Our results are similar to those of state-of-the-art methods in their best scenarios, and, assuming an appropriate kernel choice, they outperform those methods when the cluster differences are concentrated in higher-order moments. The performance of the model has been tested in several situations involving simulated and real-world datasets. Finally, new theoretical consistency results are established using tools from optimal transport theory.
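For reference, the standard definitions of the two distances named above are given below, where $X, X' \sim P$ and $Y, Y' \sim Q$ are independent copies and $k$ is a characteristic kernel.

```latex
\mathrm{ED}(P,Q) = 2\,\mathbb{E}\|X-Y\| - \mathbb{E}\|X-X'\| - \mathbb{E}\|Y-Y'\|,
\qquad
\mathrm{MMD}^2_k(P,Q) = \mathbb{E}\,k(X,X') + \mathbb{E}\,k(Y,Y') - 2\,\mathbb{E}\,k(X,Y).
```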
This work summarizes the IJCB Occluded Face Recognition Competition 2022 (IJCB-OCFR-2022) held at the 2022 International Joint Conference on Biometrics (IJCB 2022). OCFR-2022 attracted a total of 3 participating teams from academia. Eventually, six valid submissions were received and evaluated by the organizers. The competition was held to address the challenge of face recognition in the presence of severe face occlusions. Participants were free to use any training data, and the test data was built by synthetically occluding parts of the face images of a well-known dataset. The submitted solutions presented innovations and clearly outperformed the considered baseline. A major output of this competition is a challenging, realistic, diverse, and publicly available occluded face recognition benchmark with a well-defined evaluation protocol.
Changes, planned or unexpected, are common during the execution of real-life processes. Detecting these changes is a prerequisite for optimizing the performance of organizations running such processes. Most state-of-the-art algorithms focus on the detection of sudden changes, leaving aside other types of changes. In this paper, we focus on the automatic detection of gradual drifts, a special type of change in which the cases of two models overlap during a period of time. The proposed algorithm relies on conformance checking metrics to detect changes automatically and also performs a fully automatic classification of these changes as sudden or gradual. The approach has been validated with a synthetic dataset consisting of 120 logs with different change distributions, achieving better results in terms of detection and classification accuracy, delay, and change region overlap than the main state-of-the-art algorithms.
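As a rough, hypothetical illustration of how windowed conformance values could reveal a drift region, the sketch below assumes a per-trace fitness value in [0, 1] against a reference model is already available, and looks for the span in which the windowed mean moves from its initial stable level to its final one; the thresholds and the sudden/gradual rule are illustrative, not the paper's algorithm.

```python
# Hypothetical sketch of windowed conformance monitoring for drift detection.
# `fitness` is assumed to be a list of per-trace conformance values in [0, 1]
# ordered by case completion time; `window` and `tol` are illustrative.
def detect_drift_region(fitness, window=50, tol=0.05):
    """Return (start, end, change_type) in window indices, or None if no change."""
    means = [sum(fitness[i:i + window]) / window
             for i in range(0, len(fitness) - window + 1)]
    before, after = means[0], means[-1]
    if abs(before - after) <= tol:
        return None                                   # behaviour did not change
    # First window that leaves the initial stable level.
    start = next(i for i, m in enumerate(means) if abs(m - before) > tol)
    # First window (at or after `start`) that reaches the final stable level.
    end = next(i for i in range(start, len(means)) if abs(means[i] - after) <= tol)
    change_type = "sudden" if end - start <= 1 else "gradual"
    return start, end, change_type
```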